Understanding extreme events and their probability is key for the study of climate change impacts, risk assessment, adaptation, and the protection of living beings. In this work, we develop a methodology for building forecasting models of extreme heatwaves. These models are based on convolutional neural networks trained on extremely long, 8,000-year climate model outputs. Because the relations involving extreme events are intrinsically probabilistic, we emphasize probabilistic forecasting and validation. We demonstrate that deep neural networks are suited to this purpose for heatwaves lasting 14 days over France, with the fast dynamical driver (the 500 hPa geopotential height field) up to 15 days ahead, and with the slow physical driver (soil moisture) at longer lead times. The method is easy to implement and versatile. We find that the deep neural network selects extreme heatwaves associated with a Northern-Hemisphere wavenumber-3 pattern. We find that the 2-metre temperature field does not contain any new useful statistical information when it is added to the 500 hPa geopotential height and soil moisture fields. The main scientific message is that training deep neural networks to predict the occurrence of extreme heatwaves takes place in a regime of drastic lack of data. We argue that this is likely also the case for most other applications to large-scale atmospheric and climate phenomena. We discuss perspectives for dealing with this lack-of-data regime, such as rare event simulations, and how transfer learning may play a role in the latter task.
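As a rough illustration of this type of forecasting model, the sketch below shows a small convolutional network mapping stacked geophysical fields (e.g. 500 hPa geopotential height and soil moisture on a latitude-longitude grid) to a probability of heatwave occurrence. The grid size, layer widths, and training details are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical sketch: a small CNN emitting a probabilistic forecast of heatwave
# occurrence from stacked geophysical fields. Grid size, channel count and layer
# widths are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class HeatwaveCNN(nn.Module):
    def __init__(self, in_channels: int = 2, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool each feature map to a single value
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)            # logits; softmax gives class probabilities

# Example: a batch of 8 samples, 2 fields (Z500, soil moisture) on a 24x48 grid.
model = HeatwaveCNN()
fields = torch.randn(8, 2, 24, 48)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(fields), labels)    # probabilistic training objective
probs = torch.softmax(model(fields), dim=1)[:, 1]      # forecast probability of a heatwave
```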
The estimation of the generalization error of classifiers often relies on a validation set. Such a set is hardly available in few-shot learning scenarios, a highly disregarded shortcoming in the field. In these scenarios, it is common to rely on features extracted from pre-trained neural networks combined with distance-based classifiers such as nearest class mean. In this work, we introduce a Gaussian model of the feature distribution. By estimating the parameters of this model, we are able to predict the generalization error on new classification tasks with few samples. We observe that accurate distance estimates between class-conditional densities are the key to accurate estimates of the generalization performance. Therefore, we propose an unbiased estimator for these distances and integrate it in our numerical analysis. We show that our approach outperforms alternatives such as the leave-one-out cross-validation strategy in few-shot settings.
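To make the idea concrete, the sketch below estimates the squared distance between two class-conditional densities from few samples, removes the upward bias of the naive mean-to-mean estimate (which over-estimates the true squared distance by the per-class variance terms), and converts the corrected distance into a predicted nearest-class-mean error under a shared isotropic Gaussian assumption. The bias correction and the closed-form error are standard textbook forms used here for illustration, not necessarily the exact estimator proposed in the paper.

```python
# Hypothetical sketch: predict nearest-class-mean error from few samples via a
# Gaussian model of the feature distribution. The bias correction and the
# closed-form error formula are standard results, used for illustration only.
import numpy as np
from scipy.stats import norm

def unbiased_squared_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Unbiased estimate of ||mu_x - mu_y||^2 from samples x (n1, d) and y (n2, d).

    The plug-in estimate ||mean(x) - mean(y)||^2 over-estimates the true value
    by tr(Sigma_x)/n1 + tr(Sigma_y)/n2; we subtract that bias.
    """
    n1, n2 = len(x), len(y)
    plug_in = np.sum((x.mean(0) - y.mean(0)) ** 2)
    bias = x.var(0, ddof=1).sum() / n1 + y.var(0, ddof=1).sum() / n2
    return plug_in - bias

def predicted_ncm_error(x: np.ndarray, y: np.ndarray) -> float:
    """Predicted error of a nearest-class-mean classifier, assuming both classes
    are isotropic Gaussians with a shared variance (an illustrative assumption)."""
    d2 = max(unbiased_squared_distance(x, y), 0.0)
    sigma2 = 0.5 * (x.var(0, ddof=1).mean() + y.var(0, ddof=1).mean())
    return float(norm.cdf(-np.sqrt(d2) / (2.0 * np.sqrt(sigma2))))

# Example: 5-shot features (e.g. extracted by a pre-trained backbone).
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, size=(5, 64))
class_b = rng.normal(0.5, 1.0, size=(5, 64))
print(predicted_ncm_error(class_a, class_b))
```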
Mapping near-field pollutant concentrations is essential for tracking accidental toxic plume dispersion in urban areas. By resolving a large part of the turbulence spectrum, large-eddy simulation (LES) has the potential to accurately represent the spatial variability of pollutant concentration. Finding a way to synthesize this large amount of information in order to improve the accuracy of lower-fidelity operational models (e.g. by providing better turbulence closure terms) is particularly appealing. This is challenging in multi-query settings, in which LES are prohibitively expensive to deploy in order to understand how the plume flow and tracer dispersion change with various atmospheric and source parameters. To overcome this issue, we propose a non-intrusive reduced-order model combining proper orthogonal decomposition (POD) and Gaussian process regression (GPR) to predict LES field statistics of interest associated with tracer concentrations. The GPR hyperparameters are optimized component by component through a maximum a posteriori (MAP) procedure informed by POD. We provide a detailed analysis on a two-dimensional case study corresponding to a turbulent atmospheric boundary-layer flow over a surface-mounted obstacle. We show that near-source concentration heterogeneities upstream of the obstacle require a large number of POD modes to be well captured. We also show that the component-wise optimization allows capturing the range of spatial scales present in the POD modes, especially the shorter concentration patterns in the higher-order modes. The reduced-order model predictions remain acceptable if the learning database is built from at least fifty to one hundred LES snapshots, providing a first estimate of the budget required to move towards more realistic atmospheric dispersion applications.
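A minimal, non-authoritative sketch of such a POD-GPR surrogate is given below: the POD basis is obtained from the SVD of mean-centred snapshots, each retained modal coefficient is regressed independently on the input parameters with its own Gaussian process (so that hyperparameters can be tuned component by component), and new parameter values are mapped back to a full concentration field. Snapshot shapes, kernel choices and parameter names are assumptions, and scikit-learn's default marginal-likelihood optimization stands in for the POD-informed MAP procedure described in the paper.

```python
# Hypothetical sketch of a POD + GPR reduced-order model for LES field statistics.
# Shapes, kernels and parameter names are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

n_snapshots, n_cells, n_modes = 60, 5000, 10

# Training database: LES concentration snapshots and the atmospheric/source
# parameters that generated them (here two generic parameters as stand-ins).
rng = np.random.default_rng(1)
params = rng.uniform(size=(n_snapshots, 2))             # surrogate inputs
snapshots = rng.normal(size=(n_snapshots, n_cells))     # stand-in for LES fields

# POD: SVD of the mean-centred snapshot matrix.
mean_field = snapshots.mean(axis=0)
_, _, vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
modes = vt[:n_modes]                                    # (n_modes, n_cells)
coeffs = (snapshots - mean_field) @ modes.T             # modal coefficients

# One GPR per POD coefficient, so hyperparameters are optimized component-wise.
gprs = []
for k in range(n_modes):
    kernel = ConstantKernel() * RBF(length_scale=np.ones(params.shape[1]))
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gprs.append(gpr.fit(params, coeffs[:, k]))

def predict_field(new_params: np.ndarray) -> np.ndarray:
    """Reconstruct the concentration field for unseen parameter values."""
    a = np.array([g.predict(new_params.reshape(1, -1))[0] for g in gprs])
    return mean_field + a @ modes

print(predict_field(np.array([0.3, 0.7])).shape)        # (n_cells,)
```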
This work aims at generating captions for soccer videos using deep learning. In this context, this paper introduces a dataset, a model, and a triple-level evaluation. The dataset consists of 22k caption-clip pairs and three visual features (images, optical flow, inpainting) for ~500 hours of SoccerNet videos. The model is divided into three parts: a transformer learns language, ConvNets learn vision, and a fusion of linguistic and visual features generates captions. The paper suggests evaluating generated captions at three levels: syntax (the commonly used evaluation metrics such as BLEU score and CIDEr), meaning (the quality of descriptions for a domain expert), and corpus (the diversity of generated captions). The paper shows that the diversity of generated captions improved (from 0.07 to 0.18) with semantics-related losses that prioritize selected words. Semantics-related losses and the use of additional visual features (optical flow, inpainting) improved the normalized captioning score by 28%. The web page of this work: https://sites.google.com/view/soccercaptioning
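As a rough illustration of the three-part design (language transformer, visual ConvNet features, fusion), the sketch below feeds pre-extracted visual features as memory into a transformer decoder that generates caption tokens. The dimensions, vocabulary size, and fusion scheme are assumptions for illustration, not the authors' exact architecture.

```python
# Hypothetical sketch of fusing visual features with a language transformer for
# caption generation. Dimensions and fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class CaptionFusionModel(nn.Module):
    def __init__(self, vocab_size: int = 8000, d_model: int = 256, visual_dim: int = 2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.visual_proj = nn.Linear(visual_dim, d_model)   # project ConvNet features
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) caption token ids
        # visual_feats: (batch, n_frames, visual_dim) image / optical-flow / inpainting features
        memory = self.visual_proj(visual_feats)
        tgt = self.token_emb(tokens)
        seq_len = tokens.size(1)
        causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        fused = self.decoder(tgt, memory, tgt_mask=causal_mask)  # cross-attention fuses modalities
        return self.out(fused)                                   # next-token logits

model = CaptionFusionModel()
logits = model(torch.randint(0, 8000, (2, 12)), torch.randn(2, 16, 2048))
print(logits.shape)  # (2, 12, 8000)
```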